
Collaborating Authors

Roman Yampolskiy


Are We Living In A Simulation? Can We Break Out Of It?

#artificialintelligence

Roman Yampolskiy thinks we live in a simulated universe, but that we could break out of it. In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is: all we can see is shadows on a wall. In 2003, the Swedish philosopher Nick Bostrom published a paper formalising an argument that Plato was right. The reasoning is that if simulating conscious minds is possible, and civilisations can become advanced without self-destructing, then there will be an enormous number of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like ours) is a naturally occurring one.


016 - Guest: Roman Yampolskiy, Professor of AI Safety

#artificialintelligence

This and all episodes at: http://aiandyou.net/. What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of its Cyber Security Lab, and a key contributor to the field of AI safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over. In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the control problem, the central issue of AI safety: how do we ensure future AI remains under our control? All this and our usual look at today's AI headlines. Transcript and URLs referenced at the HumanCusp Blog.


The Future of AI: Superintelligence and humans -- john koetsier

#artificialintelligence

Superintelligence: what happens in a world with AI that is hundreds or thousands of times smarter than humans? In this episode, we chat with research scientist Roman Yampolskiy. He's a professor at the University of Louisville, and his most recent book is Artificial Superintelligence: A Futuristic Approach. If you listen to podcasts, here's where you can subscribe to future39 and hear more interviews like this on the future. John Koetsier: Thank you so much for coming on the show. You have an amazing background there, I love it.


Artificial Intelligence Safety and Security (Chapman & Hall/CRC Artificial Intelligence and Robotics Series): Roman V. Yampolskiy: 9780815369820: Amazon.com: Books

#artificialintelligence

Artificial Intelligence Safety and Security is a timely and ambitious edited volume. It comprises 28 chapters organized under three distinct themes: security, artificial intelligence, and safety. Edited by Roman V. Yampolskiy, the contributions are well integrated and challenge common conceptions. Yampolskiy has assembled a diverse team of leading scholars. In sum, the book provides valuable insight into the cyber ecosystem. It can be read in any order without missing the essence of the subject matter, yet the chapters speak to each other. The chapters provide insight into new research areas and experimental designs. The book is a must-read for computer scientists, security experts, mathematicians, students, and anyone interested in learning more about the progress of the artificial intelligence field. It will also be of interest to hackers and the intelligence community.


Prof Roman Yampolskiy Superintelligence is Coming

#artificialintelligence



How to create a malevolent artificial intelligence

#artificialintelligence

The possibility that a malevolent artificial intelligence might pose a serious threat to humankind has become a hotly debated issue. Various high-profile individuals, from the physicist Stephen Hawking to the tech entrepreneur Elon Musk, have warned of the danger. That is why the field of artificial intelligence safety is emerging as an important discipline. Computer scientists have begun to analyze the unintended consequences of poorly designed AI systems, of AI systems created with faulty ethical frameworks, or of ones that do not share human values. But there's an important omission in this field, say independent researchers Federico Pistono and Roman Yampolskiy from the University of Louisville in Kentucky. "Nothing, to our knowledge, has been published on how to design a malevolent machine," they say.


